
    How are emergent constraints quantifying uncertainty and what do they leave behind?

    The use of emergent constraints to quantify uncertainty in key policy-relevant quantities such as Equilibrium Climate Sensitivity (ECS) has become increasingly widespread in recent years. Many researchers, however, claim that emergent constraints are inappropriate or that they under-report uncertainty. In this paper we contribute to this discussion by examining the emergent constraints methodology in terms of its underpinning statistical assumptions. We argue that the existing frameworks are based on indefensible assumptions, then show how weakening them leads to a more transparent Bayesian framework in which hitherto ignored sources of uncertainty, such as how reality might differ from models, can be quantified. We present a guided framework for the quantification of these additional uncertainties, linked to the confidence we can have in the underpinning physical arguments for using linear constraints. We provide a software tool for implementing our general framework for emergent constraints and use it to illustrate the framework on a number of recent emergent constraints for ECS. We find that the robustness of any constraint to additional uncertainties depends strongly on the confidence we can have in the underpinning physics, allowing the debate over the validity of a particular constraint to be framed in future around the underlying physical arguments rather than the statistical assumptions.
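
    To make the statistical setting concrete, the sketch below (not the paper's software tool) fits a simple linear emergent constraint across a hypothetical model ensemble and shows where an extra model-reality discrepancy variance would enter the constrained prediction; all numbers, including the discrepancy value, are illustrative assumptions.

```python
import numpy as np

# Hypothetical ensemble: one predictor value and one ECS value per climate model
# (all numbers invented for illustration).
predictor = np.array([0.42, 0.55, 0.31, 0.60, 0.48, 0.37, 0.52, 0.45])
ecs       = np.array([2.9,  3.8,  2.4,  4.1,  3.3,  2.7,  3.6,  3.1])

# Ordinary least-squares fit of the linear constraint ECS ~ a + b * predictor.
b, a = np.polyfit(predictor, ecs, 1)
resid_var = np.var(ecs - (a + b * predictor), ddof=2)  # scatter about the fit

# Observed value of the predictor and its measurement variance (illustrative).
obs_mean, obs_var = 0.50, 0.02**2

# Extra variance acknowledging that reality need not lie on the multi-model
# line: the kind of "additional uncertainty" the paper argues should be stated.
discrepancy_var = 0.3**2

constrained_mean = a + b * obs_mean
constrained_var = resid_var + (b**2) * obs_var + discrepancy_var
print(f"constrained ECS: {constrained_mean:.2f} +/- {np.sqrt(constrained_var):.2f} K")
```

    Setting discrepancy_var to zero recovers the regression-only uncertainty, which is one way of seeing what a constraint that ignores model-reality differences leaves behind.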

    Efficient calibration for high-dimensional computer model output using basis methods

    Calibration of expensive computer models with high-dimensional output fields can be approached via history matching. If the entire output field is matched, with patterns or correlations between locations or time points represented, calculating the distance metric between observational data and model output for a single input setting requires a time-intensive inversion of a high-dimensional matrix. By using a low-dimensional basis representation rather than emulating each output individually, we define a metric in the reduced space that allows the implausibility for the field to be calculated efficiently, with only small matrix inversions required, using a projection that is consistent with the variance specifications in the implausibility. We show that projection using the $L_2$ norm can result in different conclusions, with the ordering of points not maintained on the basis, with implications for both history matching and probabilistic methods. We demonstrate the scalability of our method through history matching of the Canadian atmosphere model, CanAM4, comparing basis methods to emulation of each output individually, and show that the basis approach can be more accurate whilst also being more efficient.
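
    As a rough illustration of the reduced-space calculation (not the authors' implementation), the sketch below builds a truncated SVD basis from a synthetic ensemble and evaluates an implausibility as a Mahalanobis distance on the basis coefficients, so that only a small q-by-q system is solved; the ensemble, observations and variance matrix are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: n_runs model runs, each producing a field of n_grid values.
n_runs, n_grid, q = 30, 5000, 5
ensemble = rng.normal(size=(n_runs, n_grid))   # stand-in for model output fields
mean_field = ensemble.mean(axis=0)
anomalies = ensemble - mean_field

# Truncated SVD basis: q vectors of length n_grid capturing the leading patterns.
_, _, vt = np.linalg.svd(anomalies, full_matrices=False)
basis = vt[:q].T                               # shape (n_grid, q)

def project(field):
    """Coefficients of a field on the truncated basis (length-q vector)."""
    return basis.T @ (field - mean_field)

obs = rng.normal(size=n_grid)                  # stand-in for observations
run = ensemble[0]                              # candidate model run to assess

# Combined observation-error and discrepancy variance on the coefficients
# (illustrative diagonal choice).
var_coeff = np.diag(np.full(q, 0.5))

# Implausibility as a Mahalanobis distance in the reduced space: only a q x q
# matrix is solved, rather than an n_grid x n_grid one.
diff = project(obs) - project(run)
implausibility = np.sqrt(diff @ np.linalg.solve(var_coeff, diff))
print(f"implausibility in the {q}-dimensional basis space: {implausibility:.2f}")
```

    The point of the construction is that the only matrix solved is the q-by-q coefficient variance, however large the output field is.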

    Feature calibration for computer models

    Computer model calibration involves using partial and imperfect observations of the real world to learn which values of a model's input parameters lead to outputs that are consistent with real-world observations. When calibrating models with high-dimensional output (e.g. a spatial field), it is common to represent the output as a linear combination of a small set of basis vectors. Often, when trying to calibrate to such output, what is important to the credibility of the model is that key emergent physical phenomena are represented, even if not faithfully or in the right place. In these cases, comparison of model output and data in a linear subspace is inappropriate and will usually lead to poor model calibration. To overcome this, we present kernel-based history matching (KHM), generalising the meaning of the technique sufficiently to be able to project model outputs and observations into a higher-dimensional feature space, where patterns can be compared without their location necessarily being fixed. We develop the technical methodology, present an expert-driven kernel selection algorithm, and then apply the techniques to the calibration of boundary layer clouds for the French climate model IPSL-CM.
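
    A minimal sketch of the kernel-trick idea behind comparing fields in a feature space follows; the histogram feature map and RBF kernel are illustrative stand-ins for the expert-chosen kernels in the paper, picked here only because they make the comparison insensitive to where a pattern sits.

```python
import numpy as np

def histogram_features(field, bins=20, vrange=(0.0, 1.0)):
    """Map a spatial field to a location-insensitive feature vector: the
    normalised histogram of its values (an illustrative choice, not the paper's)."""
    h, _ = np.histogram(field, bins=bins, range=vrange, density=True)
    return h

def rbf_kernel(u, v, length_scale=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * length_scale ** 2))

def feature_space_distance(x, y):
    """Squared distance between the two fields in the implicit feature space,
    via the kernel trick: k(x,x) + k(y,y) - 2 k(x,y)."""
    fx, fy = histogram_features(x), histogram_features(y)
    return rbf_kernel(fx, fx) + rbf_kernel(fy, fy) - 2.0 * rbf_kernel(fx, fy)

# Two fields containing the same "cloud" pattern in different places.
grid = np.linspace(0.0, 10.0, 500)
field_model = np.exp(-(grid - 3.0) ** 2)   # bump centred at x = 3
field_obs   = np.exp(-(grid - 7.0) ** 2)   # identical bump centred at x = 7

print("pointwise squared L2 distance:", np.sum((field_model - field_obs) ** 2))
print("feature-space distance:       ", feature_space_distance(field_model, field_obs))
```

    The pointwise comparison penalises the displaced pattern heavily, while the feature-space comparison sees two essentially identical fields, which is the behaviour the abstract argues a credible calibration sometimes needs.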

    Prize-Collecting TSP with a Budget Constraint

    We consider constrained versions of the prize-collecting traveling salesman and the minimum spanning tree problems. The goal is to maximize the number of vertices in the returned tour/tree subject to a bound on the tour/tree cost. We present a 2-approximation algorithm for these problems based on a primal-dual approach. The algorithm relies on finding a threshold value for the dual variable corresponding to the budget constraint in the primal and then carefully constructing a tour/tree that is just within budget. We thereby improve on the previous best-known guarantees of 3+ε and 2+ε for the tree and the tour version, respectively. Our analysis extends to the setting with weighted vertices, in which we want to maximize the total weight of vertices in the tour/tree subject to the same budget constraint.
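
    The sketch below is a toy, brute-force illustration (not the primal-dual algorithm itself) of the budgeted prize-collecting tour problem on a tiny instance, with a crude scan over the Lagrange multiplier to show the kind of threshold behaviour the dual variable exhibits; the instance data and budget are arbitrary.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 6                                       # tiny instance; vertex 0 is the depot
points = rng.uniform(0.0, 10.0, size=(n, 2))
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

def tour_cost(subset):
    """Optimal tour cost over the given vertices (brute force over orderings)."""
    rest = [v for v in subset if v != 0]
    if not rest:
        return 0.0
    best = np.inf
    for perm in itertools.permutations(rest):
        order = (0,) + perm + (0,)
        best = min(best, sum(dist[a][b] for a, b in zip(order, order[1:])))
    return best

budget = 20.0
subsets = [(0,) + s for r in range(n) for s in itertools.combinations(range(1, n), r)]

# Exact answer to the budgeted problem: most vertices visitable within budget.
best_size = max(len(S) for S in subsets if tour_cost(S) <= budget)
print("max vertices within budget:", best_size)

# Lagrangian view: for each multiplier lam, maximise |S| - lam * cost(S).
# As lam grows, the maximiser's cost drops; somewhere it crosses the budget,
# which is the threshold the primal-dual algorithm pins down (here we just scan).
for lam in [0.05, 0.1, 0.2, 0.4, 0.8]:
    S = max(subsets, key=lambda S: len(S) - lam * tour_cost(S))
    print(f"lam={lam:.2f}: picks {len(S)} vertices at cost {tour_cost(S):.1f}")
```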

    Quantifying spatio-temporal boundary condition uncertainty for the North American deglaciation

    Ice sheet models are used to study the deglaciation of North America at the end of the last ice age (the past 21,000 years), so that we might understand whether and how existing ice sheets may shrink or disappear under climate change. Though ice sheet models have a few parameters controlling the physical behaviour of the ice mass, they also require boundary conditions for climate (spatio-temporal fields of temperature and precipitation, typically on regular grids and at monthly intervals). The behaviour of the ice sheet is highly sensitive to these fields, and there is relatively little data from geological records to constrain them, as the land was covered with ice. We develop a methodology for generating a range of plausible boundary conditions, using a low-dimensional basis representation of the spatio-temporal input. We derive this basis by combining key patterns, extracted from a small ensemble of climate model simulations of the deglaciation, with sparse spatio-temporal observations. By jointly varying the ice sheet parameters and basis vector coefficients, we run ensembles of the Glimmer ice sheet model that simultaneously explore both climate and ice sheet model uncertainties. We use these to calibrate the ice sheet physics and boundary conditions for Glimmer, ruling out regions of the joint coefficient and parameter space via history matching. We use binary ice/no-ice observations from reconstructions of past ice sheet margin position to constrain this space, introducing a novel metric for history matching to binary data.
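
    For intuition about the basis representation and history matching to binary data (not the paper's metric, model or basis), the sketch below rebuilds a forcing field from a handful of basis coefficients, pushes it through a toy threshold "ice model", and rules out coefficient settings with a simple fraction-of-cells-mismatched score; every ingredient here is a labelled stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reduced representation of the climate boundary conditions:
# a mean field plus k spatio-temporal patterns extracted from climate-model runs.
n_space_time, k = 4000, 4
mean_field = rng.normal(size=n_space_time)
basis = rng.normal(size=(n_space_time, k))     # stand-in for EOF-like patterns

def boundary_conditions(coeffs):
    """Rebuild a full spatio-temporal forcing field from k basis coefficients."""
    return mean_field + basis @ coeffs

def toy_ice_model(forcing, thresh=0.0):
    """Stand-in for the ice sheet model: ice wherever the forcing (a temperature
    proxy, say) is below a threshold. Returns a binary ice/no-ice mask."""
    return forcing < thresh

# Synthetic "observed" ice mask generated from one particular coefficient setting.
obs_ice = toy_ice_model(boundary_conditions(np.array([0.5, -0.2, 0.1, 0.0])))

def mismatch(coeffs):
    """Simple illustrative score: fraction of cells where simulated and observed
    binary masks disagree (the paper's metric is purpose-built, not this)."""
    return np.mean(toy_ice_model(boundary_conditions(coeffs)) != obs_ice)

candidates = rng.normal(size=(200, k))         # a space-filling design in practice
scores = np.array([mismatch(c) for c in candidates])
not_ruled_out = candidates[scores < 0.2]       # illustrative cut-off
print(f"{len(not_ruled_out)} of {len(candidates)} candidate settings not ruled out yet")
```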

    Noninvasive measurements of arterial stiffness: Repeatability and interrelationships with endothelial function and arterial morphology measures

    Corey J Huck¹, Ulf G Bronas¹, Eric B Williamson¹, Christopher C Draheim¹, Daniel A Duprez², Donald R Dengel¹,³
    ¹School of Kinesiology, University of Minnesota, Minneapolis, MN, USA; ²Cardiovascular Division, Department of Medicine, University of Minnesota, Minneapolis, MN; ³Research Service, Minneapolis Veterans Affairs Medical Center, Minneapolis, MN, USA
    Background: Many noninvasive arterial assessment techniques have been developed, measuring different parameters of arterial stiffness and endothelial function. However, there are few data available comparing different devices within the same subject. The purpose of this study was therefore to examine the repeatability of, and interrelationships between, three different techniques for measuring arterial stiffness, and to compare these with forearm-mediated dilation.
    Methods: Carotid-radial pulse wave velocity was measured with the Sphygmocor (SPWV) and Complior (CPWV) devices, the cardio-ankle vascular index (CAVI) was measured with the VaSera device, and vascular structure and function were assessed using ultrasonography. All measures were evaluated for reliability and compared in 20 apparently healthy, college-aged men and women.
    Results: The intraclass correlation coefficients and standard errors of the mean for the Sphygmocor (R = 0.56, SEM = 0.69), Complior (R = 0.62, SEM = 0.69), and VaSera (R = 0.60, SEM = 0.56) indicated moderate repeatability. Bland-Altman plots indicated a mean difference of 0.11 ± 0.84 for SPWV, 0.13 ± 1.15 for CPWV, and –0.43 ± 0.90 for CAVI. No significant interrelationships were found among the ultrasound measures and SPWV, CPWV, and CAVI.
    Conclusions: The three noninvasive modalities reliably measure arterial stiffness; however, they do not correlate with ultrasound measures of vascular function and structure in young, apparently healthy subjects.
    Keywords: pulse wave velocity, intima-media thickness, flow-mediated dilation

    Meson-exchange currents and quasielastic predictions for charged-current neutrino-¹²C scattering in the superscaling approach

    We evaluate and discuss the impact of meson-exchange currents (MECs) on charged-current quasielastic neutrino cross sections. We consider the nuclear transverse response arising from two-particle two-hole states excited by the action of electromagnetic, purely isovector meson-exchange currents in a fully relativistic framework based on the work by the Torino Collaboration [A. De Pace, M. Nardi, W. M. Alberico, T. W. Donnelly, and A. Molinari, Nucl. Phys. A726, 303 (2003)]. An accurate parametrization of this MEC response as a function of the momentum and energy transfers involved is presented. Results for neutrino-nucleus cross sections using this MEC parametrization, together with a recent scaling approach for the one-particle one-hole contributions (named SuSAv2), are compared with experimental data.
    Funding: DGI FIS2011-28738-C02-0; Junta de Andalucía QM-160; U.S. Department of Energy DE-SC001109; DGI FIS2011-24149; Junta de Andalucía FQM22